A parallel 3D digital waveguide mesh model with tetrahedral topology for room acoustic simulation
Following a summary of the basic principles of 3D waveguide mesh modelling and the context of its application to room acoustic simulation, this paper presents a detailed analysis of the tetrahedral mesh topology and describes its implementation as a parallel computer model. The structural characteristics of this topology are analysed, with particular emphasis on how they influence execution speed, and the performance deterioration caused by communication overhead in the parallelised model is discussed. Theoretical predictions are compared with data from performance tests carried out on different computer platforms, and both are contrasted with the corresponding results for the rectilinear model in order to assess the practical efficiency of the tetrahedral model. Objective validation tests are reported and discussed.
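To make the underlying scheme concrete, the sketch below implements the standard update step of a wave-based digital waveguide mesh whose scattering junctions each have four ports, as in a tetrahedral topology: the junction pressure is the scaled sum of the incoming wave components, the outgoing components are obtained by subtraction, and each outgoing wave becomes a neighbour's incoming wave on the next step. This is an illustrative C++ sketch only, with names (`Junction`, `updateMesh`) introduced here; the construction of the actual tetrahedral connectivity, the boundary conditions and the parallel decomposition discussed in the paper are omitted, and the two-junction wiring in `main` exists purely to exercise the code.

```cpp
#include <array>
#include <cstddef>
#include <cstdio>
#include <vector>

// Tetrahedral topology: every scattering junction has N = 4 ports.
constexpr std::size_t kPorts = 4;

struct Junction {
    std::array<double, kPorts> in{};            // incoming wave components p+
    std::array<double, kPorts> out{};           // outgoing wave components p-
    std::array<std::size_t, kPorts> nbr{};      // neighbour junction index per port
    std::array<std::size_t, kPorts> nbrPort{};  // matching port on that neighbour
    double pressure = 0.0;                      // junction pressure p_J
};

// One time step: scattering followed by propagation along the mesh edges.
void updateMesh(std::vector<Junction>& mesh) {
    // Scattering: p_J = (2/N) * sum_i p+_i   and   p-_i = p_J - p+_i.
    for (auto& j : mesh) {
        double sum = 0.0;
        for (double p : j.in) sum += p;
        j.pressure = (2.0 / kPorts) * sum;
        for (std::size_t i = 0; i < kPorts; ++i) j.out[i] = j.pressure - j.in[i];
    }
    // Propagation: each outgoing wave becomes the neighbour's incoming wave
    // on the next step (one-sample delay per mesh edge).
    for (const auto& j : mesh)
        for (std::size_t i = 0; i < kPorts; ++i)
            mesh[j.nbr[i]].in[j.nbrPort[i]] = j.out[i];
}

int main() {
    // Degenerate two-junction "mesh" (all four ports of each junction wired to
    // the other) used only to exercise the update step.
    std::vector<Junction> mesh(2);
    for (std::size_t i = 0; i < kPorts; ++i) {
        mesh[0].nbr[i] = 1; mesh[0].nbrPort[i] = i;
        mesh[1].nbr[i] = 0; mesh[1].nbrPort[i] = i;
    }
    mesh[0].in[0] = 1.0;                        // impulse excitation
    for (int n = 0; n < 4; ++n) {
        updateMesh(mesh);
        std::printf("step %d: p0 = %+.3f  p1 = %+.3f\n",
                    n, mesh[0].pressure, mesh[1].pressure);
    }
    return 0;
}
```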
Real-time Auralisation System for Virtual Microphone Positioning
A computer application was developed to simulate the process of microphone positioning in sound recording applications. A dense, regular grid of impulse responses pre-recorded over the region of the room under study allowed the sound captured by a virtual microphone to be auralised through real-time convolution with an anechoic stream representing the sound source. Convolution was performed using a block-based variation on the overlap-add method, in which the summation of many small sub-convolutions produced each block of output samples. As the applied RIR filter could vary between successive audio output blocks, a short cross-fade was applied to avoid audible glitches. The maximum length of impulse response that could be applied was governed by the audio processing block size (and hence latency) used by the program: larger blocks allowed a lower processing time per sample. At 23.2 ms latency (1024 samples at 44.1 kHz), it was possible to apply impulse responses of 9 seconds on a standard laptop computer.
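The sketch below illustrates the kind of block-based convolution with cross-fading described above, though it is not the paper's code: a `BlockConvolver` class (a name introduced here) keeps an input history, convolves each incoming block with the current RIR, and linearly cross-fades to a newly assigned RIR over one output block. Plain time-domain convolution is used for brevity, whereas the paper builds each output block from the summation of many small sub-convolutions; all RIRs are assumed to share the same length.

```cpp
#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

class BlockConvolver {
public:
    // All RIRs passed later via setRir are assumed to have the same length
    // as the one supplied here.
    BlockConvolver(std::vector<double> rir, std::size_t blockSize)
        : rir_(std::move(rir)), block_(blockSize),
          history_(rir_.size() + blockSize, 0.0) {}

    // Replace the RIR; the next output block is cross-faded from old to new.
    void setRir(std::vector<double> rir) { pendingRir_ = std::move(rir); }

    // Process exactly blockSize input samples; returns blockSize output samples.
    std::vector<double> process(const std::vector<double>& x) {
        // Shift the input history left and append the new block at the end.
        std::size_t keep = history_.size() - block_;
        for (std::size_t n = 0; n < keep; ++n) history_[n] = history_[n + block_];
        for (std::size_t n = 0; n < block_; ++n) history_[keep + n] = x[n];

        std::vector<double> out = convolve(rir_);
        if (!pendingRir_.empty()) {                  // RIR changed: cross-fade
            std::vector<double> outNew = convolve(pendingRir_);
            for (std::size_t n = 0; n < block_; ++n) {
                double a = static_cast<double>(n) / static_cast<double>(block_);
                out[n] = (1.0 - a) * out[n] + a * outNew[n];
            }
            rir_ = std::move(pendingRir_);
            pendingRir_.clear();
        }
        return out;
    }

private:
    // Direct convolution of the stored history with the given RIR, evaluated
    // only at the positions of the newest block.
    std::vector<double> convolve(const std::vector<double>& h) const {
        std::vector<double> y(block_, 0.0);
        std::size_t last = history_.size();          // newest sample is history_[last-1]
        for (std::size_t n = 0; n < block_; ++n) {
            std::size_t pos = last - block_ + n;     // history index of output sample n
            double acc = 0.0;
            for (std::size_t k = 0; k < h.size() && k <= pos; ++k)
                acc += h[k] * history_[pos - k];
            y[n] = acc;
        }
        return y;
    }

    std::vector<double> rir_, pendingRir_;
    std::size_t block_;
    std::vector<double> history_;
};

int main() {
    BlockConvolver conv({1.0, 0.5, 0.25}, 4);        // toy 3-tap "RIR", 4-sample blocks
    std::vector<double> block{1.0, 0.0, 0.0, 0.0};   // impulse in the first block
    for (double y : conv.process(block)) std::printf("%.3f ", y);
    std::printf("\n");
    return 0;
}
```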
Real-Time Dynamic Image-Source Implementation For Auralisation
This paper describes a software package for auralisation in interactive virtual reality environments. Its purpose is to reproduce, in real time, the 3D soundfield within a virtual room in which the listener and the sound sources can be moved freely. Output sound is presented binaurally using headphones. Auralisation is based on geometric acoustic models combined with head-related transfer functions (HRTFs): the direct sound and the reflections from each source are computed dynamically by the image-source method. Directional cues are obtained by filtering these incoming sounds with the HRTFs corresponding to their propagation directions relative to the listener, which are computed from the information provided by a head-tracking device. Two interactive real-time applications were developed to demonstrate the operation of this software package. Both provide a visual representation of the listener (position and head orientation) and the sources (including image sources); one focusses on the synchrony between auralisation and visualisation, the other on the dynamic calculation of reflection paths. Computational performance results of the auralisation system are presented.
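As an illustration of the geometric core of such a system, and not the package's actual code, the sketch below computes first-order image sources for an axis-aligned "shoebox" room and, for each resulting path, the distance and the head-relative azimuth and elevation from which an HRTF filter would be selected. Higher reflection orders, wall absorption and full head orientation (only yaw is handled) are left out; the function names and all positions are hypothetical.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

constexpr double kPi = 3.14159265358979323846;

struct Vec3 { double x, y, z; };

// Mirror the source across each of the six walls of a room spanning
// [0,L.x] x [0,L.y] x [0,L.z]; the direct path is included as element 0.
std::vector<Vec3> firstOrderImages(Vec3 src, Vec3 L) {
    std::vector<Vec3> img{src};                       // direct sound
    img.push_back({-src.x, src.y, src.z});            // wall x = 0
    img.push_back({2 * L.x - src.x, src.y, src.z});   // wall x = L.x
    img.push_back({src.x, -src.y, src.z});            // wall y = 0
    img.push_back({src.x, 2 * L.y - src.y, src.z});   // wall y = L.y
    img.push_back({src.x, src.y, -src.z});            // floor   z = 0
    img.push_back({src.x, src.y, 2 * L.z - src.z});   // ceiling z = L.z
    return img;
}

int main() {
    Vec3 room{6.0, 4.0, 3.0}, src{2.0, 1.0, 1.5}, listener{4.0, 3.0, 1.7};
    double headYaw = kPi / 2;                         // listener facing +y

    for (Vec3 s : firstOrderImages(src, room)) {
        Vec3 d{s.x - listener.x, s.y - listener.y, s.z - listener.z};
        double dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        // Direction of incidence in head-relative coordinates (yaw only):
        // 0 = straight ahead, positive counter-clockwise.
        double azimuth = std::atan2(d.y, d.x) - headYaw;
        double elevation = std::asin(d.z / dist);
        std::printf("distance %.2f m  azimuth %.1f deg  elevation %.1f deg\n",
                    dist, azimuth * 180.0 / kPi, elevation * 180.0 / kPi);
    }
    return 0;
}
```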
Immersive audio-guiding
An audio-guide prototype was developed that makes it possible to associate virtual sound sources with focal points along a tourist route. An augmented-reality effect is created, as the (virtual) audio content presented through headphones seems to originate from the specified (real) points. A route management application allows specification of source positions (GPS coordinates), audio content (monophonic files) and the route points at which playback should be triggered. The binaural spatialisation effects depend on the user's pose relative to the focal points: position is detected by a GPS receiver, and for head-tracking an IMU is attached to the headphone strap. The main application, developed in C++, streams the audio content through a real-time auralisation engine. HRTF filters are selected according to the azimuth and elevation of the path from the virtual source to the listener, continuously updated on the basis of the user's pose. Preliminary tests carried out with ten subjects confirmed the ability to provide the desired spatialisation effects and identified position detection accuracy as the main aspect to be improved in future work.
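The sketch below, which is illustrative rather than the prototype's code, shows one way the head-relative azimuth of a virtual source could be derived from the user's GPS position and IMU heading: latitude/longitude differences are converted to local metres with an equirectangular approximation (an assumption, not stated in the paper), the bearing to the source is computed, and the head yaw is subtracted. Elevation is taken as zero, and the function name, coordinates and heading used in `main` are hypothetical.

```cpp
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;
constexpr double kEarthRadius = 6371000.0;              // metres

struct GeoPoint { double latDeg, lonDeg; };

// Head-relative azimuth (radians, 0 = straight ahead, positive clockwise) of
// `source` as heard by a user at `user` whose head points `headingDeg`
// degrees clockwise from north.
double sourceAzimuth(GeoPoint user, GeoPoint source, double headingDeg) {
    double latRad = user.latDeg * kPi / 180.0;
    // Equirectangular approximation: small lat/lon offsets -> local metres.
    double north = (source.latDeg - user.latDeg) * kPi / 180.0 * kEarthRadius;
    double east  = (source.lonDeg - user.lonDeg) * kPi / 180.0 * kEarthRadius *
                   std::cos(latRad);
    double bearing = std::atan2(east, north);            // clockwise from north
    double azimuth = bearing - headingDeg * kPi / 180.0;
    // Wrap to (-pi, pi] so transitions between HRTF filters stay smooth.
    while (azimuth <= -kPi) azimuth += 2 * kPi;
    while (azimuth > kPi)   azimuth -= 2 * kPi;
    return azimuth;
}

int main() {
    GeoPoint user{40.6405, -8.6538}, focalPoint{40.6410, -8.6530};  // hypothetical points
    double azimuth = sourceAzimuth(user, focalPoint, 30.0);         // head 30 deg east of north
    std::printf("source azimuth: %.1f deg\n", azimuth * 180.0 / kPi);
    return 0;
}
```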